List of Flash News about Huawei chips
| Time | Details |
|---|---|
| 2026-01-14 22:10 | China AI Push: GLM-Image Trained on Huawei Chips as Beijing Moves to Block Nvidia H200 Imports; Crypto Market Impact Explained. According to a social media post dated Jan 14, 2026, GLM-Image has been released and was trained entirely on Huawei chips, indicating a fully domestic compute stack. The same post states that Beijing is moving to block Nvidia H200 imports to advance AI self-reliance. For crypto traders, the immediate market impact appears limited, as the post does not reference any blockchain integrations or tokens. |
| 2025-10-22 04:00 | DeepSeek v3.2 685B MoE Cuts AI Inference Costs 6–7x and Speeds Long-Context Inference 2–3x; MIT-Licensed and Huawei-Optimized: Trading Takeaways for AI Infrastructure. According to @DeepLearningAI (Oct 22, 2025), DeepSeek's new 685B MoE model v3.2 attends only to the most relevant tokens and delivers 2–3x faster long-context inference than v3.1. Processing is 6–7x cheaper than v3.1, with the API priced at $0.28/$0.028/$0.42 per 1M input/cached/output tokens. The model weights are MIT-licensed and optimized for Huawei and other Chinese chips, enabling broader deployment options on China-based compute. Performance is broadly similar to v3.1, with small gains on coding and agentic tasks and slight dips on some science and math benchmarks. These disclosed cost and latency metrics give traders a concrete benchmark for tracking pricing pressure and efficiency trends across the AI infrastructure, decentralized compute, and on-chain agent tooling sectors. |
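
As a quick reference for the pricing cited above, here is a minimal Python sketch that turns the disclosed per-token rates into a per-request cost estimate. The `estimate_cost` function and the example token counts are illustrative assumptions for back-of-the-envelope math, not part of DeepSeek's API or the source post.

```python
# Minimal sketch: estimating a DeepSeek v3.2 API request cost from the
# per-token prices quoted by @DeepLearningAI ($0.28 / $0.028 / $0.42
# per 1M input / cached / output tokens). Function name and example
# token counts are illustrative assumptions, not an official client.

PRICE_PER_M = {"input": 0.28, "cached": 0.028, "output": 0.42}  # USD per 1M tokens

def estimate_cost(input_tokens: int, cached_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of one request under the quoted prices."""
    return (
        input_tokens / 1e6 * PRICE_PER_M["input"]
        + cached_tokens / 1e6 * PRICE_PER_M["cached"]
        + output_tokens / 1e6 * PRICE_PER_M["output"]
    )

# Example: a long-context request with 100k fresh input tokens,
# 400k cache-hit tokens, and 2k generated tokens.
cost = estimate_cost(100_000, 400_000, 2_000)
print(f"${cost:.4f}")  # -> $0.0400 (0.028 + 0.0112 + 0.00084)
```

Note that under the quoted rates, cached input is exactly 10x cheaper than fresh input, so for long-context workloads with high cache-hit rates most of the claimed cost reduction comes from the cached-token tier.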